ForgedAttributes: An Existential Forgery Vulnerability of CMS and PKCS#7 Signatures
This work describes an existential signature forgery vulnerability in the current CMS and PKCS#7 signature standards. The vulnerability results from an ambiguity in how the signed message is processed during signature verification. Specifically, the absence or presence of the so-called SignedAttributes field determines whether the signature message digest receives as input the message directly or the SignedAttributes, a DER-encoded structure which contains a digest of the message. If an attacker takes a CMS or PKCS#7 signed message M which was originally signed with SignedAttributes present, he can craft a new message M′ that was never signed by the signer, has the DER-encoded SignedAttributes of the original message as its content, and verifies correctly against the original signature of M. Due to the limited flexibility of the forged message and the limited control the attacker has over it, the fraction of vulnerable systems must be assumed to be small; but given the wide deployment of the affected protocols, such instances cannot be excluded. We propose a countermeasure based on attack detection that prevents the attack reliably.
Fast and Secure Root Finding for Code-based Cryptosystems
In this work we analyze five previously published or trivial
approaches, as well as two new hybrid variants, for the task of finding the roots of the error locator polynomial
during the decryption operation of code-based encryption schemes. We compare
the performance of these algorithms and show that optimizations concerning
finite field element representations
play a key role for the speed of software implementations.
Furthermore, we point out a number of timing attack vulnerabilities that
can arise in root-finding algorithms, some aimed at recovering the message,
others at the secret support. We give experimental results of software
implementations showing that
manifestations of these vulnerabilities are present in straightforward
implementations of most of the root-finding variants presented in this
work.
As a result, we find that one of the variants provides security with respect to
all vulnerabilities as well as competitive computation time for code parameters that minimize the public key size.
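The simplest root-finding variant, exhaustive evaluation of the error locator polynomial over the whole field, can be sketched as follows. The field GF(2^4) and the polynomial are toy illustrative choices, not the parameters from the work itself; the comment in `find_roots` marks where a secret-dependent shortcut would introduce exactly the kind of timing leak discussed.

```python
# Toy GF(2^4) with primitive polynomial x^4 + x + 1 (illustrative
# parameters only).  Log/antilog tables for fast multiplication.
EXP, LOG = [0] * 30, [0] * 16
x = 1
for i in range(15):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x10:
        x ^= 0x13
for i in range(15, 30):
    EXP[i] = EXP[i - 15]

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def eval_poly(coeffs, x):
    # Horner evaluation over GF(2^4); coeffs[0] is the leading
    # coefficient, addition in the field is XOR.
    acc = 0
    for c in coeffs:
        acc = gf_mul(acc, x) ^ c
    return acc

def find_roots(coeffs):
    # Exhaustive evaluation over all field elements.  An early abort
    # once all roots are found, or any branch on intermediate values,
    # would make the running time depend on the secret error positions.
    return [x for x in range(16) if eval_poly(coeffs, x) == 0]

# sigma(X) = (X - 3)(X - 5): coefficients built by field arithmetic.
sigma = [1, 3 ^ 5, gf_mul(3, 5)]
print(find_roots(sigma))  # prints [3, 5]
```

In a code-based decryption the roots identify the error positions, which is why both their values and the time taken to find them are security sensitive.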
Efficiency and Implementation Security of Code-based Cryptosystems
This thesis studies efficiency and security problems of implementations of code-based
cryptosystems. These cryptosystems, though not currently used in the field, are of great
scientific interest, since no quantum algorithm is known that breaks them essentially
faster than any known classical algorithm. This qualifies them as cryptographic schemes
for the quantum-computer era, where the currently used cryptographic schemes are
rendered insecure.
Concerning the efficiency of these schemes, we propose a solution for the handling of
the public keys, which are, compared to the currently used schemes, of an enormous size.
Here, the focus lies on resource-constrained devices, which are not capable of storing a
code-based public key of a communication partner in their volatile memory. Furthermore,
we show a solution for decryption without the parity check matrix at an acceptable
speed penalty. This is also of great importance, since this matrix is of a size
comparable to that of the public key; thus, the use of this matrix on memory-constrained devices
is either impossible or incurs a large cost.
Subsequently, we present an analysis of improvements to the generally most
time-consuming part of the decryption operation, which is the determination of the roots of
the error locator polynomial. We compare a number of known algorithmic variants and
new combinations thereof in terms of running time and memory demands. Though the
speed of pure software implementations must be seen as one of the strengths of code-based schemes,
the optimisation of their running time on resource-constrained devices
and servers is of great relevance.
The second essential part of the thesis studies the side channel security of these
schemes. A side channel vulnerability exists when an attacker is able to retrieve
information about the secrets involved in a cryptographic operation by measuring physical
quantities such as the running time or the power consumption during that operation.
Specifically, we consider attacks on the decryption operation, which either target the
message or the secret key. In most cases, concrete countermeasures are proposed and
evaluated. In this context, we show a number of timing vulnerabilities that are linked to
the algorithmic variants for the root-finding of the error locator polynomial mentioned
above. Furthermore, we show a timing attack against a vulnerability in the Extended
Euclidean Algorithm that is used to solve the so-called key equation during the decryption
operation, which aims at the recovery of the message. We also present a related
practical power analysis attack. Concluding, we present a practical timing attack that
targets the secret key, which is based on the combination of three vulnerabilities, located
within the syndrome inversion, a further suboperation of the decryption, and the already
mentioned solving of the key equation.
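The core observation behind the timing attacks on the key-equation solving can be illustrated with a deliberately simplified stand-in: the iteration count of the plain Euclidean algorithm on integers varies with its operands. This is not the polynomial extended Euclidean algorithm from the decryption itself, only a minimal sketch of why an input-dependent loop count turns into a measurable timing signal.

```python
def gcd_steps(a: int, b: int) -> int:
    # Count Euclidean iterations -- a stand-in for the running time of
    # the extended Euclidean algorithm used to solve the key equation.
    steps = 0
    while b:
        a, b = b, a % b
        steps += 1
    return steps

# The iteration count depends on the operands.  In code-based
# decryption, the analogous polynomial degrees depend on the secret
# error pattern, so the time difference leaks secret information.
print(gcd_steps(1071, 462))  # prints 3
print(gcd_steps(89, 55))     # prints 9 (Fibonacci-like worst case)
```

A constant-time countermeasure would run the loop for a fixed, input-independent number of iterations, performing dummy operations once the true computation has finished.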
We compare the attacks that aim at the recovery of the message with the analogous
attacks against the RSA cryptosystem and derive a general methodology for the discovery
of the underlying vulnerabilities in cryptosystems with specific properties.
Furthermore, we present two implementations of the code-based McEliece cryptosystem:
a smart card implementation and a flexible implementation based on a
previous open-source implementation. The latter was extended to be platform
independent and optimised for resource-constrained
devices. In addition, we added all algorithmic variants presented in this thesis, and
we present all relevant performance data such as running time, code size and memory
consumption for these variants on an embedded platform. Moreover, we implemented
all side channel countermeasures developed in this work.
Concluding, we present open research questions, which will become relevant once
efficient and secure implementations of code-based cryptosystems are evaluated by the
industry for an actual application.
How to Implement the Public Key Operations in Code-based Cryptography on Memory-constrained Devices
While it is generally believed that due to their large public
key sizes code based public key schemes cannot be conveniently used
when memory-constrained devices are involved, we propose an approach
for Public Key Infrastructure (PKI) scenarios which totally eliminates
the need to store public keys of communication partners. Instead, all the
necessary computation steps are performed during the transmission of
the key. We show the feasibility of the approach through an example
implementation and give arguments that it will be possible for a smart
card controller to carry out the associated computations fast enough to sustain the
transmission rates of possible future high-speed contactless interfaces.
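The key idea of processing the public key during its transmission can be sketched as follows. In a McEliece-style encryption c = mG (the error vector is omitted here), each ciphertext bit depends on only one column of the generator matrix G, so the device can consume columns as they arrive and never hold the full key. The matrix dimensions and layout below are toy assumptions for illustration.

```python
import random

random.seed(1)

k, n = 4, 8  # toy code dimensions (illustrative only)
# The generator matrix as it would be transmitted: one column at a time.
G_columns = [[random.randint(0, 1) for _ in range(k)] for _ in range(n)]

def encrypt_streaming(message_bits, columns):
    # Compute c = m*G on the fly over GF(2): each ciphertext bit needs
    # only the current column of G, so a memory-constrained device can
    # process the key during transmission instead of storing it.
    ciphertext = []
    for col in columns:
        bit = 0
        for m_bit, g_bit in zip(message_bits, col):
            bit ^= m_bit & g_bit
        ciphertext.append(bit)
    return ciphertext

m = [1, 0, 1, 1]
print(encrypt_streaming(m, G_columns))
```

The constraint this places on the hardware is exactly the one the abstract names: the per-column computation must keep up with the interface's transmission rate.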
Prime and Prejudice: Primality Testing Under Adversarial Conditions
This work provides a systematic analysis of primality testing under adversarial conditions, where the numbers being tested for primality are not generated randomly, but instead provided by a possibly malicious party. Such a situation can arise in secure messaging protocols where a server supplies Diffie-Hellman parameters to the peers, or in a secure communications protocol like TLS where a developer can insert such a number to be able to later passively spy on client-server data. We study a broad range of cryptographic libraries and assess their performance in this adversarial setting. As examples of our findings, we are able to construct 2048-bit composites that are declared prime with non-negligible probability by OpenSSL's primality testing in its default configuration, far above its advertised error rate. We can also construct 1024-bit composites that always pass the primality testing routine in GNU GMP when configured with the recommended minimum number of rounds. And, for a number of libraries (Cryptlib, LibTomCrypt, JavaScript Big Number, WolfSSL), we can construct composites that always pass the supplied primality tests. We explore the implications of these security failures in applications, focusing on the construction of malicious Diffie-Hellman parameters. We show that, unless careful primality testing is performed, an adversary can supply parameters which on the surface look secure, but where the discrete logarithm problem in the generated subgroup is easy. We close by making recommendations for users and developers. In particular, we promote the Baillie-PSW primality test, which is both efficient and conjectured to be robust even in the adversarial setting for numbers up to a few thousand bits.
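The failure mode exploited here, fixed-base Miller-Rabin being fooled by crafted composites, is easy to demonstrate with a textbook implementation. This is a generic Miller-Rabin sketch, not any particular library's routine; 2047 = 23 × 89 is the smallest strong pseudoprime to base 2.

```python
import random

def miller_rabin(n: int, bases) -> bool:
    # One Miller-Rabin round per base; "probably prime" iff n passes
    # for every base.  Write n - 1 = 2^s * d with d odd.
    if n < 2 or n % 2 == 0:
        return n == 2
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in bases:
        a %= n
        if a == 0:
            continue
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # composite witness found
    return True

# 2047 = 23 * 89 is composite, yet a strong pseudoprime to base 2:
assert miller_rabin(2047, [2]) is True      # fixed base is fooled
assert miller_rabin(2047, [2, 3]) is False  # base 3 exposes it
# Randomly chosen bases make targeted pseudoprimes much harder to craft.
assert miller_rabin(101, [random.randrange(2, 100) for _ in range(20)])
```

An adversary who knows which bases a library will use can construct much larger composites that pass every round, which is precisely the setting the paper studies.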
A Smart Card Implementation of the McEliece PKC
In this paper we present a smart card implementation of the quantum computer resistant McEliece Public Key Cryptosystem (PKC) on an Infineon SLE76 chip. We describe the main features of the implementation, which focuses on performance optimization. We give the resource demands and timings for two sets of security parameters, the higher one being in the secure domain. The timings suggest the usability of the implementation for certain real world applications.
An Analysis of OpenSSL's Random Number Generator
In this work we demonstrate various weaknesses of the random number generator (RNG)
in the OpenSSL cryptographic library.
We show how OpenSSL's RNG, when knowingly in a low entropy state, potentially leaks low entropy
secrets in its output which were never intentionally fed to the RNG by
client code, thus posing vulnerabilities even when the client application
respects the low entropy state in the given usage scenario.
Turning to the core cryptographic functionality of the RNG,
we show how OpenSSL's functionality for
adding entropy to the RNG state fails to act as an effective mixing function.
If an initial low entropy state of the RNG
was falsely presumed to have 256 bits of entropy based on wrong entropy
estimations, attempts to recover from this state succeed only in the long term
but fail in the short term.
As a result, the entropy level of generated cryptographic keys can be limited to
80 bits, even though thousands of bits of entropy might have been fed to the RNG state
previously. In the same scenario, we demonstrate an attack recovering the RNG
state from later output with a feasible off-line effort measured in hash evaluations,
for seeds with an entropy level above 160 bits. We also show
that seed data of limited entropy, fed into the RNG, under certain
circumstances, might be recovered from
its output with a comparable effort of hash evaluations.
These results are highly relevant for embedded systems that fail to provide sufficient
entropy through their operating system RNG at boot time and rely on subsequent reseeding of
the OpenSSL RNG.
Furthermore, we identify a
design flaw that limits the entropy of the RNG\u27s output to 240 bits in the general case
even for an initially correctly seeded RNG, despite the fact that a security level of
256 bits is intended.
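For contrast with the flawed design, a straightforward hash-based reseed/generate construction can be sketched as follows. This is a minimal illustration of what an effective mixing function looks like, not OpenSSL's actual design: the entire old state and the entire seed pass through a single 256-bit hash call, so no input entropy is silently dropped, and the state can carry the full 256 bits the security level targets.

```python
import hashlib

STATE_LEN = 32  # 256-bit state, matching a 256-bit security target

def reseed(state: bytes, seed: bytes) -> bytes:
    # Effective mixing: old state and new seed are hashed together in
    # one call, so the new state's entropy is min(input entropy, 256).
    return hashlib.sha256(state + seed).digest()

def generate(state: bytes) -> tuple[bytes, bytes]:
    # Derive the output and the next state through separate, domain-
    # separated hash calls so output never reveals the next state.
    out = hashlib.sha256(b"out" + state).digest()
    nxt = hashlib.sha256(b"next" + state).digest()
    return out, nxt

state = bytes(STATE_LEN)
state = reseed(state, b"high-entropy seed material")
out, state = generate(state)
print(out.hex())
```

Against this baseline, the 240-bit cap described above is a pure design artifact: the inputs and hash primitive would support the full 256 bits.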